Spectral Analysis of Linear Operators

Definition: Let $A: V \to V$ be a linear transformation on a vector space $V$. A subspace $W$ of $V$ is called invariant under $A$ if $A(x) \in W$ for all $x \in W$.

For instance, let $A$ be the linear transformation on $\mathbb{R}^2$ given by $A(x,y) = (x+y, x-y)$, and let $W = \{(x,0) \in \mathbb{R}^2 \mid x \in \mathbb{R}\}$. Then $W$ is not invariant under $A$: for $x \neq 0$, $A(x,0) = (x,x) \notin W$.


Example: Show that the range $R(A)$ is an invariant subspace under $A$.

Solution: Let $x \in R(A)$. Then $x = Ay$ for some $y \in V$, so $Ax = A(Ay) = A^2 y \in R(A)$.


Example: Let $M = \operatorname{span}\left\{\begin{bmatrix} 1 \\ 1 \end{bmatrix}\right\}$ and $A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$. Is $M$ an invariant subspace under $A$?

Solution: Let $x = \begin{bmatrix} x_1 \\ x_1 \end{bmatrix} \in M$. Then $Ax = \begin{bmatrix} x_1 + 2x_1 \\ 2x_1 + x_1 \end{bmatrix} = \begin{bmatrix} 3x_1 \\ 3x_1 \end{bmatrix} = 3x_1 \begin{bmatrix} 1 \\ 1 \end{bmatrix} \in M$. Thus, $M$ is an invariant subspace under $A$.
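The computation above can be spot-checked numerically; a minimal sketch using NumPy, with the matrix and the spanning vector of $M$ taken from this example:

```python
import numpy as np

# A and the spanning vector of M from the example above
A = np.array([[1, 2], [2, 1]])
v = np.array([1, 1])

# A v = 3 v, so the image of the basis vector stays in M = span{(1, 1)}
Av = A @ v
assert np.allclose(Av, 3 * v)
```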


Example: Show that the null space $N(A)$ is an invariant subspace under $A$.

Solution: Let $x \in N(A)$. Then $Ax = 0 \in N(A)$, since the zero vector belongs to every subspace.


Definition: Powers of a linear transformation $A$ are defined as follows:

$$A^k x = \underbrace{A(A(\cdots(Ax)\cdots))}_{k \text{ times}}$$

By using this definition, polynomials of a linear transformation $A$ can be constructed as linear combinations of powers of $A$:

$$p(A) = \alpha_0 A^n + \alpha_1 A^{n-1} + \cdots + \alpha_{n-1} A + \alpha_n I$$

where $I$ is the identity transformation and $\alpha_0, \alpha_1, \ldots, \alpha_n$ are scalars.

Property: $A \, p(A) = p(A) \, A$; that is, $A$ commutes with any polynomial of $A$.
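This commuting property can be checked numerically for a sample polynomial; a minimal sketch, where the matrix is the one used in the notes and the particular polynomial $p(A) = A^3 - 2A + 5I$ is an arbitrary illustrative choice:

```python
import numpy as np

A = np.array([[1, 2], [2, 1]])

# Sample polynomial p(A) = A^3 - 2A + 5I (coefficients chosen arbitrarily)
pA = A @ A @ A - 2 * A + 5 * np.eye(2)

# A commutes with any polynomial of itself: A p(A) = p(A) A
assert np.allclose(A @ pA, pA @ A)
```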


Example: Show that $A^2 = 2A + 3I$ for $A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$.

Solution: $A^2 = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} = \begin{bmatrix} 5 & 4 \\ 4 & 5 \end{bmatrix} = 2 \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} + \begin{bmatrix} 3 & 0 \\ 0 & 3 \end{bmatrix} = 2A + 3I$.
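The identity can also be verified with NumPy; a minimal check of the matrices in this example:

```python
import numpy as np

A = np.array([[1, 2], [2, 1]])

lhs = A @ A                   # A^2 = [[5, 4], [4, 5]]
rhs = 2 * A + 3 * np.eye(2)   # 2A + 3I
assert np.allclose(lhs, rhs)
```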


Example: Show that $R(p(A))$ and $N(p(A))$ are invariant subspaces under $A$ for any polynomial $p(A)$.

Solution: Let $x \in R(p(A))$. Then $x = p(A)y$ for some $y \in V$, so $Ax = A\,p(A)\,y = p(A)(Ay) \in R(p(A))$. Thus, $R(p(A))$ is an invariant subspace under $A$.

Let $x \in N(p(A))$. Then $p(A)x = 0$, so $p(A)(Ax) = (p(A)A)x = (A\,p(A))x = A\,p(A)x = A0 = 0$. Thus, $Ax \in N(p(A))$, and $N(p(A))$ is an invariant subspace under $A$.


Definition: Let $A$ denote the matrix representation of a linear transformation $A: V \to V$, with $A$ an $n \times n$ matrix. Then, the eigenvalues of $A$ are the roots of the characteristic polynomial of $A$:

$$\det(sI - A) = 0$$

The roots $\lambda_i$ of $\det(sI - A) = 0$ are the eigenvalues of $A$.

Definition: Nonzero vectors $e_i \in V$ satisfying $Ae_i = \lambda_i e_i$ are called eigenvectors of $A$ corresponding to the eigenvalues $\lambda_i$.

Example: Find the eigenvalues and eigenvectors of $A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$.

Solution: $\det(sI - A) = \det \begin{bmatrix} s-1 & -2 \\ -2 & s-1 \end{bmatrix} = (s-1)^2 - 4 = s^2 - 2s - 3 = (s-3)(s+1) = 0$. Thus, $\lambda_1 = 3$ and $\lambda_2 = -1$.

$$\text{For } \lambda_1 = 3: \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 3 \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \implies \begin{bmatrix} -2 & 2 \\ 2 & -2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 0 \implies x_1 = x_2 \implies \begin{bmatrix} x_1 \\ x_1 \end{bmatrix} = x_1 \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
$$\text{For } \lambda_2 = -1: \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = -1 \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \implies \begin{bmatrix} 2 & 2 \\ 2 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 0 \implies x_1 = -x_2 \implies \begin{bmatrix} x_1 \\ -x_1 \end{bmatrix} = x_1 \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$
Thus, $e_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $e_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$ are the eigenvectors of $A$ corresponding to $\lambda_1 = 3$ and $\lambda_2 = -1$, respectively. Note that $e_1$ and $e_2$ are linearly independent; moreover, they are orthogonal.
Thus, $\mathbb{R}^2 = \operatorname{span}\left\{\begin{bmatrix} 1 \\ 1 \end{bmatrix}\right\} \oplus \operatorname{span}\left\{\begin{bmatrix} 1 \\ -1 \end{bmatrix}\right\}$
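The hand computation agrees with a numerical eigendecomposition; a minimal sketch using `np.linalg.eig`, which returns unit-norm eigenvectors in no guaranteed order:

```python
import numpy as np

A = np.array([[1, 2], [2, 1]])
evals, evecs = np.linalg.eig(A)

# Eigenvalues are 3 and -1
assert np.allclose(sorted(evals), [-1, 3])

# Each returned column e satisfies A e = lambda e
for lam, e in zip(evals, evecs.T):
    assert np.allclose(A @ e, lam * e)

# The hand-computed eigenvectors (1, 1) and (1, -1) are orthogonal
assert np.dot([1, 1], [1, -1]) == 0
```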

Theorem: Consider the linear transformation $y = Ax$ with $A$ an $n \times n$ matrix. Suppose that

I. $\mathbb{C}^n = M_1 \oplus M_2 \oplus \cdots \oplus M_k$

II. $M_i$ is an invariant subspace under $A$ for $i = 1, 2, \ldots, k$

Then, the transformation $A$ can be represented as a block diagonal matrix:

$$\bar A = \begin{bmatrix} \bar A_1 & 0 & \cdots & 0 \\ 0 & \bar A_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \bar A_k \end{bmatrix}$$

where $\bar A = P^{-1} A P$,

$$P = \begin{bmatrix} p_1 & p_2 & \cdots & p_k \end{bmatrix}$$

$$p_i = \begin{bmatrix} e_i^1 & e_i^2 & \cdots & e_i^{n_i} \end{bmatrix}$$

$n_i$ is the dimension of $M_i$, and $e_i^j$ is the $j^{\text{th}}$ basis vector of $M_i$.


Example: Let $A = \begin{bmatrix} 1 & 1 & -1 \\ -1 & 3 & -2 \\ 0 & 0 & 1 \end{bmatrix}$. Consider the subspaces $M_1 = \operatorname{span}\left\{\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}\right\}$ and $M_2 = \operatorname{span}\left\{\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}\right\}$.

1- Is $M_1$ invariant under $A$?

2- Is $M_2$ invariant under $A$?

3- Change the basis in both the domain and the codomain to $\{b_1^1, b_1^2, b_2^1\}$, where $b_1^1, b_1^2$ span $M_1$ and $b_2^1$ spans $M_2$.

Solution: 1- Let $x \in M_1$. Then $x = \alpha_1 \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + \alpha_2 \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$, and $Ax = \begin{bmatrix} 2\alpha_1 + \alpha_2 \\ 2\alpha_1 + 3\alpha_2 \\ 0 \end{bmatrix} = (2\alpha_1 + \alpha_2) \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + 2\alpha_2 \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \in M_1$. Thus, $M_1$ is an invariant subspace under $A$.

2- Let $x \in M_2$. Then $x = \alpha_1 \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}$, and $Ax = \begin{bmatrix} 0 \\ \alpha_1 \\ \alpha_1 \end{bmatrix} = \alpha_1 \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \in M_2$. Thus, $M_2$ is an invariant subspace under $A$.

3- $P = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}$ and $P^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix}$

$$\bar A = P^{-1} A P = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 & -1 \\ -1 & 3 & -2 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
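The change of basis can be reproduced numerically; a minimal check of the block-diagonalization in this example:

```python
import numpy as np

A = np.array([[1, 1, -1], [-1, 3, -2], [0, 0, 1]])
# Columns of P: the two basis vectors of M1, then the basis vector of M2
P = np.array([[1, 0, 0], [1, 1, 1], [0, 0, 1]])

A_bar = np.linalg.inv(P) @ A @ P
assert np.allclose(A_bar, [[2, 1, 0], [0, 2, 0], [0, 0, 1]])
```

The upper-left $2 \times 2$ block acts on $M_1$ and the lower-right $1 \times 1$ block acts on $M_2$.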


Example: Let $A = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 2 & 1 \\ 0 & 0 & 3 \end{bmatrix}$. Find the eigenvalues and eigenvectors of $A$.

Solution: $\det(sI - A) = \det \begin{bmatrix} s-1 & 0 & 0 \\ 1 & s-2 & -1 \\ 0 & 0 & s-3 \end{bmatrix} = (s-1)(s-2)(s-3) = 0$. Thus, $\lambda_1 = 1$, $\lambda_2 = 2$, and $\lambda_3 = 3$.

For $\lambda_1 = 1$: $N(A - I) = N\left(\begin{bmatrix} 0 & 0 & 0 \\ -1 & 1 & 1 \\ 0 & 0 & 2 \end{bmatrix}\right) = \operatorname{span}\left\{\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}\right\}$, so $e_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$

For $\lambda_2 = 2$: $N(A - 2I) = N\left(\begin{bmatrix} -1 & 0 & 0 \\ -1 & 0 & 1 \\ 0 & 0 & 1 \end{bmatrix}\right) = \operatorname{span}\left\{\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}\right\}$, so $e_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$

For $\lambda_3 = 3$: $N(A - 3I) = N\left(\begin{bmatrix} -2 & 0 & 0 \\ -1 & -1 & 1 \\ 0 & 0 & 0 \end{bmatrix}\right) = \operatorname{span}\left\{\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}\right\}$, so $e_3 = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}$

$A$ is diagonalizable since $e_1, e_2, e_3$ are linearly independent. From $P \bar A = A P$:

$P = \begin{bmatrix} e_1 & e_2 & e_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}$ and

$$\begin{bmatrix} e_1 & e_2 & e_3 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 2 & 1 \\ 0 & 0 & 3 \end{bmatrix} \begin{bmatrix} e_1 & e_2 & e_3 \end{bmatrix}$$

$$\bar A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}$$
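The diagonalization can be verified numerically; a minimal check using the eigenvectors found above as the columns of $P$:

```python
import numpy as np

A = np.array([[1, 0, 0], [-1, 2, 1], [0, 0, 3]])
P = np.array([[1, 0, 0], [1, 1, 1], [0, 0, 1]])  # columns e1, e2, e3

# P^{-1} A P is diagonal with the eigenvalues on the diagonal
A_bar = np.linalg.inv(P) @ A @ P
assert np.allclose(A_bar, np.diag([1.0, 2.0, 3.0]))
```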


Theorem: Let $A$ be an $n \times n$ matrix with distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_k$. Then, the eigenvectors $e_1, e_2, \ldots, e_k$ corresponding to $\lambda_1, \lambda_2, \ldots, \lambda_k$ are linearly independent.

$$\lambda_i \neq \lambda_j \text{ for } i \neq j \implies e_i, e_j \text{ linearly independent}$$

Then the set of eigenvectors $\{e_1, e_2, \ldots, e_k\}$ forms a linearly independent set. Moreover,

$$\operatorname{span}\{e_i\} = N(A - \lambda_i I)$$
and, when $k = n$ (all $n$ eigenvalues distinct),
$$\operatorname{span}\{e_1, e_2, \ldots, e_k\} = \mathbb{C}^n = N(A - \lambda_1 I) \oplus N(A - \lambda_2 I) \oplus \cdots \oplus N(A - \lambda_k I)$$
$$\bar A = P^{-1} A P = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_k \end{bmatrix}$$

Proof: Can be found in the textbook.


#EE501 - Linear Systems Theory at METU